Linear Prediction and Subspace Fitting Blind Channel Identification Based on Cyclic Statistics

Authors

  • Luc Deneire
  • Dirk T.M. Slock
Abstract

Blind channel identification and equalization based on second-order statistics by subspace fitting and linear prediction have received a lot of attention lately. On the other hand, the use of cyclic statistics in fractionally sampled channels has also raised considerable interest. We propose to use these statistics in subspace fitting and linear prediction for (possibly multiuser and multiple antenna) channel identification. We base our identification schemes on the cyclic statistics, using the stationary multivariate representation introduced by [2] and [4], [5]. This leads to the use of all cyclic statistics. The proposed methods appear to have good performance.

(The work of Luc Deneire is supported by the EC by a Marie Curie Fellowship (TMR program) under contract No ERBFMBICT950155.)

1. PROBLEM POSITION

We consider a communication system with p emitters and a receiver constituted of an array of M antennas. The received signals are oversampled by a factor m w.r.t. the symbol rate. The channel is FIR of duration NT/m, where T is the symbol duration. The received signal can be written as

$$x(n) = \sum_{k=-\infty}^{+\infty} h(k)\,u(n-k) + v(n) = \sum_{k=-\infty}^{+\infty} h(n-km)\,a_k + v(n),
\qquad u(n) = \sum_{k=-\infty}^{+\infty} a_k\,\delta(n-km).$$

The received signal x(n) and the noise v(n) are M × 1 vectors; x(n) is cyclostationary with period m, whereas v(n) is assumed not to be cyclostationary with period m. h(k) has dimension M × p; a(k) and u(k) have dimension p × 1.

2. CYCLIC STATISTICS

Under the assumptions above, the correlations

$$R_{xx}(n,\tau) = \mathrm{E}\big\{x(n)\,x^H(n-\tau)\big\}$$

are cyclic in n with period m (superscript H denotes complex conjugate transpose). One can easily express them as

$$R_{xx}(n,\tau) = \sum_{k=-\infty}^{+\infty}\;\sum_{l=-\infty}^{+\infty} h(n-km)\,R_{aa}(l)\,h^H(n-km-\tau+lm) \;+\; R_{vv}(\tau).$$

We then express the k-th cyclic correlation as

$$R^{\{k\}}_{xx}(\tau) \;\triangleq\; \frac{1}{m}\sum_{l=0}^{m-1} R_{xx}(l,\tau)\,e^{-j2\pi lk/m} \;=\; \mathrm{E}_k\big\{x(l)\,x^H(l-\tau)\big\},$$

whose value is

$$R^{\{k\}}_{xx}(\tau) = \frac{1}{m}\sum_{\mu=-\infty}^{+\infty}\;\sum_{l=-\infty}^{+\infty} h(\mu)\,R_{aa}(l)\,h^H(\mu-\tau+lm)\,e^{-j2\pi\mu k/m} \;+\; R_{vv}(\tau)\,\delta(k).$$

We can introduce a cyclic correlation matrix as

$$R^{\{k\}}_{xx} \triangleq \begin{bmatrix}
R^{\{k\}}_{xx}(0) & R^{\{k\}}_{xx}(1) & \cdots & R^{\{k\}}_{xx}(K-1)\\
R^{\{k\}}_{xx}(-1) & R^{\{k\}}_{xx}(0) & \cdots & R^{\{k\}}_{xx}(K-2)\\
\vdots & \vdots & \ddots & \vdots\\
R^{\{k\}}_{xx}(1-K) & R^{\{k\}}_{xx}(2-K) & \cdots & R^{\{k\}}_{xx}(0)
\end{bmatrix}
= \mathcal{T}_K\big(H_N D^{\{k,p\}}_{\mathrm{DFT}}\big)\, R^{\{k\}}_{uu}\, \mathcal{T}^H_K(H_N) \;+\; \delta(k)\,R_{vv},$$

where $R^{\{k\}}_{uu} = R_{aa} \otimes I_m$ and $\otimes$ is a block Kronecker product (the first matrix is a block matrix and the second matrix is an elementwise matrix), $\mathcal{T}_K(H_N)$ is the convolution matrix of $H_N = [\,h(0)^T\ h(1)^T \cdots h(N-1)^T\,]^T$, and

$$D^{\{k,p\}}_{\mathrm{DFT}} = \mathrm{blockdiag}\big[\,I_p \;\big|\; e^{-j2\pi k/m} I_p \;\big|\; \cdots \;\big|\; e^{-j2\pi (N-1)k/m} I_p\,\big].$$

3. GLADYSHEV'S THEOREM AND MIAMEE PROCESS

Gladyshev's theorem [2] states that:

Theorem 1. The function $R_{xx}(n,\tau)$ is the correlation function of some PCS (Periodically Correlated Sequence) iff the matrix-valued function

$$\mathcal{R}(\tau) = \big[\,R^{\{kk'\}}_{xx}(\tau)\,\big]_{k,k'=0}^{m-1}, \qquad \text{where } R^{\{kk'\}}_{xx}(\tau) = R^{\{k-k'\}}_{xx}(\tau)\,e^{2\pi jk\tau/m},$$

is the matricial correlation function of some m-variate stationary sequence.

Recalling that $R^{\{k\}}_{xx}(\tau) = R^{\{m-k\}\,H}_{xx}(-\tau)$, the matrix

$$\mathcal{R} \triangleq \begin{bmatrix}
\mathcal{R}(0) & \mathcal{R}(1) & \cdots & \mathcal{R}(K-1)\\
\mathcal{R}(-1) & \mathcal{R}(0) & \cdots & \mathcal{R}(K-2)\\
\vdots & \vdots & \ddots & \vdots\\
\mathcal{R}(1-K) & \mathcal{R}(2-K) & \cdots & \mathcal{R}(0)
\end{bmatrix}$$

is a Hermitian K × K block Toeplitz matrix of Mm × Mm blocks. Miamee [4] then gives the explicit expression of the associated multivariate stationary process:

$$Z_n = \bigoplus_{k=0}^{m-1} Z^k_n, \qquad Z^k_n = \bigoplus_{j=0}^{m-1} x(n+j)\,e^{2\pi jk(n+j)/m},$$

where $\oplus$ is the direct sum, i.e., noting $w = e^{2\pi j/m}$,

$$Z^k_n = w^{kn}\big[\,x(n),\; x(n+1)\,w^{k},\; \dots,\; x(n+m-1)\,w^{k(m-1)}\,\big]$$

is defined in a Hilbert space where the correlation is the following Euclidean product:

$$\big\langle Z^k_n, Z^{k'}_{n+l}\big\rangle = \sum_{j=0}^{m-1} \mathrm{E}\big\{Z^k_n(j)\,Z^{k'\,H}_{n+l}(j)\big\},$$

and $Z_n = [\,Z^{0\,T}_n\; Z^{1\,T}_n \cdots Z^{m-1\,T}_n\,]^T$ with the classical correlation for multivariate stationary processes. On the other hand, Miamee gives the link between linear prediction on $Z_n$ and the cyclic AR model of x(n).

4. EXPRESSION OF Z_n w.r.t. u(n) AND h(n)

From $Z^k_n = \bigoplus_{j=0}^{m-1} x(n+j)\,e^{2\pi jk(n+j)/m}$ and

$$x(n+j) = \sum_{k=0}^{N-1} h(k)\,u(n+j-k) + v(n+j) = H_N \begin{bmatrix} u(n+j)\\ u(n+j-1)\\ \vdots\\ u(n+j-N+1) \end{bmatrix} + v(n+j),$$

defining $U_{n+j} = [\,u(n+j)^T \cdots u(n+j-N+1)^T\,]^T$ and $H^{\{k\}}_N = [\,w^{-kj}\,h(j)\,]_{j=0}^{N-1}$, we express the Miamee process as

$$Z^k_n = \bigoplus_{j=0}^{m-1}\Big(H^{\{-k\}}_N\,w^{kn}\,U_{n+j} + v(n+j)\,e^{2\pi jk(n+j)/m}\Big)
= H^{\{-k\}}_N\,w^{kn}\,\big[\,U_n\; U_{n+1} \cdots U_{n+m-1}\,\big] + \bigoplus_{j=0}^{m-1} v(n+j)\,e^{2\pi jk(n+j)/m},$$

so that

$$Z_n = H_{\mathrm{tot}}\,\mathcal{U}(n) + \mathcal{V}(n),$$

where we noted $H_{\mathrm{tot}} = [\,H^{\{0\}T}_N\; H^{\{-1\}T}_N \cdots H^{\{1-m\}T}_N\,]^T$, $\mathcal{U}(n) = D^{\{n,pN\}}_{\mathrm{DFT}}\,[\,U_n\; U_{n+1} \cdots U_{n+m-1}\,]$ and

$$\mathcal{V}(n) = \begin{bmatrix}
v(n) & \cdots & v(n+m-1)\\
v(n)\,w^{n} & \cdots & v(n+m-1)\,w^{n+m-1}\\
\vdots & \ddots & \vdots\\
v(n)\,w^{n(m-1)} & \cdots & v(n+m-1)\,w^{(m-1)(n+m-1)}
\end{bmatrix}.$$

Hence

$$Z = \mathcal{T}_{L+N-1}(H_{\mathrm{tot}})\,\mathcal{U}_L + \mathcal{V}_L, \qquad (1)$$

where $\mathcal{U}_L = [\,\mathcal{U}(n)\,]_{n=0}^{L-1}$ clearly is a stationary process whose correlation matrix can easily be deduced from $R_{aa}$. Based on relation (1), we apply the classical subspace fitting and linear prediction channel identification schemes, as detailed below.

5. SIGNAL SUBSPACE FITTING

We briefly recall the signal subspace fitting (noise subspace based) blind channel identification algorithm hereunder. One can write the (compact form of the) SVD of the cyclocorrelation matrix $\mathcal{R} = U D V^H$ with the relations

$$\mathrm{range}\{U\} = \mathrm{range}\{V\} = \mathrm{range}\{\mathcal{T}_K(H_{\mathrm{tot}})\}.$$

We have assumed that $\mathcal{T}_K(H_{\mathrm{tot}})$ is full rank, which leads to the usual identifiability condition. We can then solve the classical subspace fitting problem:

$$\min_{H_{\mathrm{tot}},\,T}\ \big\|\mathcal{T}_K(H_{\mathrm{tot}}) - U\,T\big\|_F^2.$$

If we introduce $U^{\perp}$ such that $[\,U\; U^{\perp}\,]$ is a unitary matrix, this leads to

$$\min_{H_{\mathrm{tot}}}\ H^{t\,H}_{\mathrm{tot}} \Big[\sum_{i=D^{\perp}}^{KMm} \mathcal{T}_N\big(U^{\perp\,t}_i\big)\,\mathcal{T}^H_N\big(U^{\perp\,t}_i\big)\Big] H^{t}_{\mathrm{tot}},$$

where $U^{\perp}_i$ is KMm × 1, $D^{\perp} = N+K$ and superscript t denotes the transposition of the blocks of a block matrix. Under the constraint $\|H_{\mathrm{tot}}\| = 1$, $\hat H^{t}_{\mathrm{tot}}$ is then the eigenvector corresponding to the minimum eigenvalue of the matrix between brackets. One can lower the computational burden by using $D^{\perp} > N+K$ (see a.o. [6]). The case p > 1 can be (partially) solved in a manner similar to [7] and [3].

6. LINEAR PREDICTION

We consider the denoised case. The correlation matrix is then computed as follows.
$$R^{\{0\}}_{xx,sb} = R^{\{0\}}_{xx} - R_{VV}(\tau),$$

where

$$[R_{VV}(\tau)]_{i,j} = \sum_{l=0}^{m-1} \mathrm{E}\big\{v(n+l)\,v^H(n+l+\tau)\big\}\, w^{i(n+l)}\, w^{-j(n+l+\tau)}
= R_{vv}(\tau)\, w^{n(i-j)}\, w^{-j\tau} \sum_{l=0}^{m-1} w^{(i-j)l}
= R_{vv}(\tau)\, w^{n(i-j)}\, w^{-j\tau}\, m\,\delta_{ij}
= m\,\delta_{ij}\, R_{vv}(\tau)\, w^{-j\tau}.$$

Hence

$$R_{VV}(\tau) = m\,\mathrm{blockdiag}\big[\,R_{vv}(\tau) \;\big|\; w^{-\tau} R_{vv}(\tau) \;\big|\; \cdots \;\big|\; w^{-(m-1)\tau} R_{vv}(\tau)\,\big],$$

which, in $\mathcal{R}$, corresponds to the noise contribution of the zero cyclic frequency cyclic correlation.

From equation (1), and noting $Z_K(n-1) = [\,Z_j\,]_{j=n-K}^{n-1}$, the predicted quantities are

$$\hat Z(n)\big|Z_K(n-1) = p_1 Z_{n-1} + \cdots + p_K Z_{n-K}, \qquad \tilde Z(n) = Z(n) - \hat Z(n)\big|Z_K(n-1).$$

Following [9], we rewrite the correlation matrix as

$$\mathcal{R} = \begin{bmatrix} R_o & r_K \\ r^H_K & R_{K-1} \end{bmatrix},$$

which yields the prediction filter

$$P_K \triangleq [\,p_1 \cdots p_K\,] = r_K\, R^{-1}_{K-1}$$

and the prediction error variance

$$\sigma^2_{\tilde Z,K} = R_o - P_K\, r^H_K,$$

where the inverse may be replaced by the Moore-Penrose pseudoinverse and still yield a consistent channel estimate. Another way of being robust to order overestimation would be to use the Levinson-Wiggins-Robinson (LWR) algorithm to find the prediction quantities and to estimate the order within this algorithm.

Many routes lead from the prediction quantities to the channel estimate ([8] and [1]). For our purpose, we used the simple suboptimal solution hereunder. From the prediction error equation, it is easy to derive

$$H^{Tt}_{\mathrm{tot}}\,\mathcal{T}_K\big([\,I_{Mm}\; -P^t_K\,]\big) = [\,H^{Tt}_{\mathrm{tot}}(0)\;\; 0\,].$$

Hence $H_{\mathrm{tot}}$ is found by minimizing the norm of the product of $H^{Tt}_{\mathrm{tot}}$ with the last $Mm(K-1)$ columns of $\mathcal{T}_K([\,I_{Mm}\; -P^t_K\,])$, and is thus the left singular vector corresponding to the minimum singular value of this matrix. This solution corresponds to a "plain least-squares" solution and is robust w.r.t. order overestimation. This also means that it is not the best solution available, but that discussion is beyond the scope of this paper.

7. COMPUTATIONAL ASPECTS

It is obvious that the correlation matrix $\mathcal{R}$ built from the cyclic correlations is bigger than the corresponding matrix built from the classical time-series representation of oversampled stationary signals (in fact, each scalar in R is replaced by an m × m block in $\mathcal{R}$).
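The linear prediction step of Section 6 reduces to a block partition of the estimated correlation matrix followed by a (pseudo)inverse. Below is a minimal numerical sketch of that step, with a synthetic Hermitian positive-definite matrix standing in for the cyclic correlation matrix of $Z_n$; the block size and prediction order are illustrative only, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 2          # block size (plays the role of Mm; illustrative)
K = 4          # prediction order
n = B * (K + 1)

# Synthetic Hermitian positive-definite matrix standing in for the
# block correlation matrix R of the process Z_n.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T + n * np.eye(n)

# Partition R = [[R_o, r_K], [r_K^H, R_{K-1}]].
R_o = R[:B, :B]
r_K = R[:B, B:]
R_Km1 = R[B:, B:]

# Prediction filter P_K = r_K R_{K-1}^+ ; the Moore-Penrose
# pseudoinverse keeps the step well defined when R_{K-1} is
# rank deficient (order overestimation).
P_K = r_K @ np.linalg.pinv(R_Km1)

# Prediction error variance sigma = R_o - P_K r_K^H
sigma = R_o - P_K @ r_K.conj().T

# sigma is the Schur complement of R_{K-1} in R, hence Hermitian
# positive semi-definite whenever R is.
eigvals = np.linalg.eigvalsh(sigma)
print(bool(np.all(eigvals > -1e-8)))  # prints True
```

Using the pseudoinverse instead of the plain inverse is exactly what makes this step tolerant to order overestimation, as noted in the text.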
This size increase must be balanced against the stronger structure cast into our correlation matrix. In fact, one can (not so easily) prove that the estimates $\hat H^{\{k\}}_N$ are strictly related (i.e. $\hat H^{\{k\}}_N = [\,w^{-kj}\,\hat h(j)\,]_{j=0}^{N-1}$ for all k), which indicates that this structure should lead to reduced-complexity algorithms w.r.t. the original ones. When developing the expressions in detail, this is particularly obvious in linear prediction, where the prediction filter has a strong structure (also visible in [5]).

8. SIMULATIONS

In our simulations, we restrict ourselves to the p = 1 case, using a randomly generated real channel of length 6T, an oversampling factor of m = 3 and M = 3 antennas. We plot the NRMSE of the channel, defined as

$$\mathrm{NRMSE} = \sqrt{\frac{1}{100}\sum_{l=1}^{100} \big\|\hat h^{(l)} - h\big\|_F^2 \,\big/\, \|h\|_F^2},$$

where $\hat h^{(l)}$ is the estimated channel in the l-th trial. In the figures below, the NRMSE in dB has been calculated as $20\log_{10}(\mathrm{NRMSE})$. The correlation matrix is calculated from a burst of 100 QAM-4 symbols (note that if we used real sources, we would have used the conjugate cyclocorrelation, which is another means of getting rid of the noise, provided it is circular). For these simulations, we used 100 Monte Carlo runs.

8.1. Subspace fitting

The estimates of 25 realisations, for an SNR of 20 dB, are reproduced hereunder.

[Figure: estimated channel coefficients for 25 realisations at 20 dB SNR.]

For comparison, we used the same algorithm on the classical time-series representation of the oversampled signal. The results hereunder show a better performance for the classic approach, which is due to the fact that we used the same complexity for both algorithms (same matrix size), resulting in a smaller noise subspace for the cyclic approach. In theory, when one uses the same subspace size, as there is a one-to-one correspondence between the elements of the classic correlation matrix and the elements of the cyclic correlation matrix, the performances should be equal.
The third curve illustrates this fact.

[Figure: NRMSE for subspace fitting vs. SNR (dB). + : cyclic correlation; x : cyclic correlation, increased complexity; o : classic correlation.]

8.2. Linear prediction

For linear prediction, we expect a slightly better performance from the cyclic approach than from the classic approach. Indeed, in the classic approach, if we use for example M = 1 antenna and an oversampling factor of m = 3, we predict $[\,x(n)\ x(n-1)\ x(n-2)\,]^T$ based on $[\,x(n-3)\ x(n-4) \cdots]^T$, whereas in the cyclic approach we predict the scalar x(n) based on $[\,x(n-1)\ x(n-2)\ x(n-3) \cdots]^T$. The corresponding prediction filter thus captures a little more of the prediction features in the cyclic case. On the other hand, since the noise contribution is present only in the zero cyclic frequency cyclic correlation, we expect a better behavior of the method if we do not take the noise into account in the correlation matrix (i.e. if we do not estimate the noise variance before doing the linear prediction). These expectations are confirmed by the following simulations; note that the label "LP on cyclic statistics" refers to the use of $\mathcal{R}$ with the noise contribution removed, whereas "LP on cyclic statistics, no 'denoising'" refers to the use of the plain correlation matrix.

[Figure: channel NRMSE (dB) vs. SNR using linear prediction techniques. + : LP on stationary statistics; x : LP on cyclic statistics; x : LP on cyclic statistics, no "denoising".]
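The NRMSE figure of merit used above is straightforward to reproduce. The sketch below uses a toy channel and artificially perturbed "estimates" in place of actual identification runs; the channel layout and the 0.05 perturbation level are arbitrary stand-ins, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Section 8 setup: M = 3 antennas, oversampling m = 3, channel length 6T.
M, m, N = 3, 3, 6
h = rng.standard_normal((M * m, N))  # "true" channel; layout is illustrative

trials = 100
acc = 0.0
for _ in range(trials):
    # Stand-in for the estimate of the l-th Monte Carlo run.
    h_hat = h + 0.05 * rng.standard_normal(h.shape)
    acc += np.linalg.norm(h_hat - h, 'fro') ** 2 / np.linalg.norm(h, 'fro') ** 2

nrmse = np.sqrt(acc / trials)       # NRMSE as defined in Section 8
nrmse_db = 20 * np.log10(nrmse)     # dB conversion used in the figures
print(nrmse_db < 0)                 # prints True: small errors sit below 0 dB
```

With a perturbation of 5% of the entry scale, the NRMSE lands near 0.05, i.e. roughly -26 dB, which is the order of magnitude read off the paper's curves at high SNR.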



Publication year: 1997